Results 1 - 7 of 7
1.
Sci Rep ; 14(1): 3123, 2024 02 07.
Article in English | MEDLINE | ID: mdl-38326488

ABSTRACT

As cardiovascular disorders are prevalent, there is a growing demand for reliable and precise diagnostic methods in this domain. Audio signal-based heart disease detection is a promising area of research that leverages the sound signals generated by the heart to identify and diagnose cardiovascular disorders. Machine learning (ML) and deep learning (DL) techniques are pivotal in classifying and identifying heart disease from audio signals. This study investigates ML and DL techniques for detecting heart disease by analyzing noisy sound signals, employing two subsets of datasets from the PASCAL Challenge that contain real heart audio recordings. The signals are visually depicted using spectrograms and Mel-Frequency Cepstral Coefficients (MFCCs). Data augmentation is used to improve the models' performance by introducing synthetic noise into the heart sound signals. In addition, a feature ensembler is developed to integrate various audio feature extraction techniques. Several machine learning and deep learning classifiers are utilized for heart disease detection. Among the models studied, and in comparison with previous findings, the multilayer perceptron performed best, with an accuracy of 95.65%. This study demonstrates the potential of this methodology for accurately detecting heart disease from sound signals. These findings present promising opportunities for enhancing medical diagnosis and patient care.
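The noise-based augmentation step described above can be sketched as adding Gaussian noise at a chosen signal-to-noise ratio. This is a minimal illustration of the general technique, not the paper's exact pipeline; the function name and the synthetic test signal are our own.

```python
import numpy as np

def augment_with_noise(signal: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Add Gaussian noise to a 1-D audio signal at a target SNR in decibels."""
    rng = np.random.default_rng(rng)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Example: a synthetic 1-second, 2 Hz tone (~120 bpm) at 1 kHz sampling
t = np.linspace(0, 1, 1000, endpoint=False)
clean = np.sin(2 * np.pi * 2 * t)
noisy = augment_with_noise(clean, snr_db=10, rng=0)
```

Each augmented copy keeps the original label, so the training set grows while exposing the model to noise it will meet in real recordings.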


Subjects
Cardiovascular Diseases, Heart Diseases, Heart Sounds, Humans, Artificial Intelligence, Neural Networks, Computer, Heart Diseases/diagnosis, Machine Learning
2.
PeerJ Comput Sci ; 10: e1793, 2024.
Article in English | MEDLINE | ID: mdl-38259893

ABSTRACT

The Internet of Things (IoT), an intriguing technology with substantial potential for tackling many societal concerns, has been developing into a significant component of the future. The foundation of the IoT is the capacity to manipulate and track physical objects over the Internet. The IoT network infrastructure becomes more vulnerable to attackers as additional features are made accessible online. Cyberattacks have grown in complexity and pose a larger threat to public and private sector organizations: they undermine Internet businesses, tarnish company branding, and restrict access to data and services. Enterprises and academics are contemplating machine learning (ML) and deep learning (DL) for cyberattack prevention because ML and DL show immense potential in several domains. Several DL techniques are implemented to extract various patterns from annotated datasets, and DL can be a helpful tool for detecting cyberattacks. Early segregation and detection of network data thus become more essential than ever for mitigating cyberattacks. Several deep learning model variants, including deep neural networks (DNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs), are implemented in this study to detect cyberattacks on an assortment of network traffic streams. The Canadian Institute for Cybersecurity's CICIoT2023 dataset is used to test the efficacy of the proposed approach. The proposed method includes data preprocessing, robust scaling, label encoding for categorical variables, and prediction using deep learning models. The experimental results demonstrate that the RNN model achieved the highest accuracy, 96.56%. The test results indicate that the proposed approach is efficient compared with other methods for identifying cyberattacks in a realistic IoT environment.
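The preprocessing stage mentioned above (robust scaling plus label encoding of categorical variables) can be sketched in a few lines. This is a from-scratch illustration of the two standard transforms, assuming nothing about the paper's actual code; the function names and example labels are ours.

```python
import numpy as np

def robust_scale(X: np.ndarray) -> np.ndarray:
    """Scale each column to (x - median) / IQR, making features robust to outliers."""
    median = np.median(X, axis=0)
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    iqr = np.where(q3 - q1 == 0, 1.0, q3 - q1)   # avoid division by zero
    return (X - median) / iqr

def label_encode(labels):
    """Map categorical labels to integer codes (sorted order for determinism)."""
    classes = sorted(set(labels))
    mapping = {c: i for i, c in enumerate(classes)}
    return [mapping[l] for l in labels], mapping

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0], [100.0, 50.0]])
X_scaled = robust_scale(X)
codes, mapping = label_encode(["DDoS", "Benign", "DDoS"])
```

Because the median and interquartile range are used instead of mean and standard deviation, a single extreme traffic burst (the 100.0 here) does not distort the scale of the other samples.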

3.
PLoS One ; 18(11): e0293061, 2023.
Article in English | MEDLINE | ID: mdl-37939093

ABSTRACT

Predicting student performance automatically is of utmost importance due to the substantial volume of data within educational databases. Educational data mining (EDM) devises techniques to uncover insights from data originating in educational settings. Artificial intelligence (AI) can mine educational data to predict student performance and provide measures that help students avoid failing and learn better. Learning platforms complement traditional learning settings by analyzing student performance, which can help reduce the chance of student failure. Existing methods for student performance prediction in educational data mining face challenges such as limited accuracy, imbalanced data, and difficult feature engineering; these issues hinder effective adaptability and generalization across diverse educational contexts. This study proposes a machine learning-based system with deep convoluted features for predicting students' academic performance. The proposed framework predicts academic performance on balanced as well as imbalanced datasets, with balancing performed via the synthetic minority oversampling technique (SMOTE). In addition, performance is evaluated using both the original and the deep convoluted features. Experimental results indicate that the deep convoluted features provide better prediction accuracy than the original features. The extra tree classifier with convoluted features achieves the highest classification accuracy of 99.9%, and the proposed approach outperforms state-of-the-art methods. This research introduces a powerful AI-driven system for student performance prediction, offering substantial gains in accuracy over existing approaches.
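The core idea of SMOTE, used above for balancing, is to synthesize new minority-class samples by interpolating between a real sample and one of its nearest minority-class neighbours. The sketch below is a minimal from-scratch version of that interpolation step, not the full SMOTE algorithm or the paper's implementation; the function name is ours.

```python
import numpy as np

def smote_oversample(X_min: np.ndarray, n_new: int, k: int = 5, rng=None) -> np.ndarray:
    """Generate n_new synthetic minority samples by interpolating between a
    random minority sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from sample i to every other minority sample
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]      # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                        # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                  [1.0, 1.0], [0.5, 0.5], [0.2, 0.8]])
X_new = smote_oversample(X_min, n_new=4, k=3, rng=0)
```

Because each synthetic point lies on a segment between two real minority samples, the new data stays inside the minority class's region rather than being random noise.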


Subjects
Academic Performance, Artificial Intelligence, Humans, Students, Machine Learning, Educational Status
4.
PeerJ Comput Sci ; 9: e1332, 2023.
Article in English | MEDLINE | ID: mdl-37346725

ABSTRACT

For the past few years, the concept of the smart home has gained popularity. The major challenges concerning a smart home include data security, privacy, authentication, secure identification, and automated decision-making by Internet of Things (IoT) devices. Existing home automation systems address some of these challenges; however, home automation that is reliable and safe and also incorporates automated decision-making is an absolute necessity. This study proposes a deep learning-driven smart home system that integrates a convolutional neural network (CNN) for automated decision-making, such as classifying a device as "ON" or "OFF" based on its utilization at home. Additionally, to provide a decentralized, secure, and reliable mechanism for the authentication and identification of IoT devices, we integrate the emerging blockchain technology into this study. The proposed system comprises a variety of sensors, a 5 V relay circuit, and a Raspberry Pi, which operates as a server and maintains a database of each device in use. Moreover, an Android application is developed that communicates with the Raspberry Pi interface through an Apache server and an HTTP web interface. The practicality of the proposed system for home automation is tested and evaluated in the lab and in real time to ensure its efficacy. The study also verifies that the technology and hardware used in the proposed smart home system are inexpensive, widely available, and scalable. Furthermore, a discussion of the implications of the risk analysis, including cyber threats, hardware security, and cyberattacks, highlights the need to incorporate a more comprehensive security and privacy model into the design phase of smart homes. The experimental results emphasize the significance of the proposed system and validate its usability in the real world.
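The blockchain-backed device log described above rests on one mechanism: each record stores the hash of its predecessor, so tampering with any past event breaks the chain. The toy sketch below shows only that hash-linking idea, not the paper's actual blockchain integration; the block fields and device name are hypothetical.

```python
import hashlib
import json
import time

def make_block(device_id: str, state: str, prev_hash: str) -> dict:
    """Create a block recording one device event, linked to the previous block."""
    block = {
        "device_id": device_id,
        "state": state,            # e.g. "ON" or "OFF"
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain: list) -> bool:
    """Verify every block's own hash and its link to the predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("lamp-01", "OFF", prev_hash="0" * 64)
chain = [genesis, make_block("lamp-01", "ON", prev_hash=genesis["hash"])]
```

Silently flipping a recorded state (say, changing an "ON" event to "OFF") invalidates that block's hash, so the forgery is detectable by any node that replays the chain.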

5.
Sensors (Basel) ; 23(5)2023 Mar 02.
Article in English | MEDLINE | ID: mdl-36904963

ABSTRACT

The performance of human gait recognition (HGR) is affected by partial obstruction of the human body caused by the limited field of view in video surveillance. Traditional methods require a bounding box to recognize human gait accurately in video sequences; however, this is a challenging and time-consuming approach. Owing to important applications such as biometrics and video surveillance, HGR has improved in performance over the last half-decade. Based on the literature, the challenging covariant factors that degrade gait recognition performance include walking while wearing a coat or carrying a bag. This paper proposes a new two-stream deep learning framework for human gait recognition. In the first step, a contrast enhancement technique based on the fusion of local and global filter information is proposed, and a high-boost operation is applied to highlight the human region in a video frame. Data augmentation is performed in the second step to increase the size of the preprocessed dataset (CASIA-B). In the third step, two pre-trained deep learning models, MobileNetV2 and ShuffleNet, are fine-tuned and trained on the augmented dataset using deep transfer learning. Features are extracted from the global average pooling layer instead of the fully connected layer. In the fourth step, the extracted features of both streams are fused using a serial-based approach, and in the fifth step they are further refined using an improved equilibrium state optimization-controlled Newton-Raphson (ESOcNR) selection method. The selected features are finally classified using machine learning algorithms. The experiments were conducted on 8 angles of the CASIA-B dataset and obtained accuracies of 97.3, 98.6, 97.7, 96.5, 92.9, 93.7, 94.7, and 91.2%, respectively. Comparisons with state-of-the-art (SOTA) techniques showed improved accuracy and reduced computational time.
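Two of the steps above have compact textbook forms: high-boost sharpening adds back an amplified copy of the image's high-frequency detail, and serial fusion simply concatenates the two feature streams. The sketch below shows the generic versions of both, assuming a plain 3x3 mean filter as the blur; it is not the paper's specific local/global fusion filter.

```python
import numpy as np

def box_blur(img: np.ndarray) -> np.ndarray:
    """3x3 mean filter with edge padding."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def high_boost(img: np.ndarray, k: float = 1.5) -> np.ndarray:
    """High-boost sharpening: original plus k times the high-frequency detail."""
    return img + k * (img - box_blur(img))

# Serial (concatenation-based) fusion of two feature streams
f_mobilenet = np.array([0.1, 0.9])   # hypothetical stream-1 features
f_shufflenet = np.array([0.4, 0.6])  # hypothetical stream-2 features
fused = np.concatenate([f_mobilenet, f_shufflenet])
```

On a flat region the blur equals the original, so high-boost leaves it unchanged; only edges and texture (the human silhouette against the background) are amplified.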


Subjects
Deep Learning, Humans, Algorithms, Gait, Machine Learning, Biometry/methods
6.
Front Comput Neurosci ; 16: 1083649, 2022.
Article in English | MEDLINE | ID: mdl-36507304

ABSTRACT

Leukemia (blood cancer) arises when the number of white blood cells (WBCs) in the human body is imbalanced. Acute lymphocytic leukemia (ALL), in which the bone marrow produces many immature WBCs that kill healthy cells, affects people of all ages. Timely prediction of this disease thus increases the chance of survival, as the patient can begin therapy early. Manual prediction is expensive and time-consuming; therefore, automated prediction techniques are essential. In this research, we propose an ensemble automated prediction approach that uses four machine learning algorithms: K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Random Forest (RF), and Naive Bayes (NB). The C-NMC leukemia dataset from the Kaggle repository is used to predict leukemia. The dataset is divided into two classes: cancer and healthy cells. We perform data preprocessing steps; first, the images are cropped using minimum and maximum points. Feature extraction is performed using pre-trained Convolutional Neural Network-based Deep Neural Network (DNN) architectures (VGG19, ResNet50, or ResNet101). Data scaling is performed using the MinMaxScaler normalization technique. Analysis of Variance (ANOVA), Recursive Feature Elimination (RFE), and Random Forest (RF) are used as feature selection techniques. Machine learning classification algorithms and ensemble voting are applied to the selected features. Results reveal that SVM outperforms the other algorithms with 90.0% accuracy.
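Two steps of the pipeline above are simple enough to sketch directly: min-max scaling maps every feature column to [0, 1], and hard ensemble voting takes, for each sample, the most common label across classifiers. This is a generic illustration, not the paper's code; the per-classifier predictions below are hypothetical.

```python
import numpy as np
from collections import Counter

def min_max_scale(X: np.ndarray) -> np.ndarray:
    """Scale each feature column to [0, 1], as in min-max normalization."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi - lo == 0, 1.0, hi - lo)   # guard constant columns
    return (X - lo) / span

def majority_vote(predictions: list) -> list:
    """Hard-voting ensemble: per sample, pick the most common class label."""
    return [Counter(sample).most_common(1)[0][0] for sample in zip(*predictions)]

# Hypothetical predictions from three classifiers over four cell images
knn = ["cancer", "healthy", "cancer", "cancer"]
svm = ["cancer", "healthy", "healthy", "cancer"]
rf  = ["healthy", "healthy", "cancer", "cancer"]
ensemble = majority_vote([knn, svm, rf])
```

With an odd number of voters and two classes there are no ties, so every sample gets a clear majority decision.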

7.
Diagnostics (Basel) ; 13(1)2022 Dec 29.
Article in English | MEDLINE | ID: mdl-36611393

ABSTRACT

BACKGROUND AND OBJECTIVE: In 2019, a coronavirus disease (COVID-19) was detected in China that affected millions of people around the world. On 11 March 2020, the WHO declared this disease a pandemic, and more than 200 countries have since been affected. Manual diagnosis of this disease using chest X-ray (CXR) images and magnetic resonance imaging (MRI) is time consuming and always requires an expert; therefore, researchers have introduced several computerized techniques based on computer vision methods. Recent computerized techniques face challenges such as low-contrast CXR images, manual initialization of hyperparameters, and redundant features that mislead classification. METHODS: In this paper, we propose a novel framework for COVID-19 classification using deep Bayesian optimization and improved canonical correlation analysis (ICCA). In this framework, we initially perform data augmentation for better training of the selected deep models. Two pre-trained deep models (ResNet50 and InceptionV3) are then trained using transfer learning, with the hyperparameters of both models initialized through Bayesian optimization. Both trained models are utilized for feature extraction, and the extracted features are fused using an ICCA-based approach. The fused features are further optimized using an improved tree growth optimization algorithm and finally classified using a neural network classifier. RESULTS: The experiments were conducted on five publicly available datasets and achieved accuracies of 99.6, 98.5, 99.9, 99.5, and 100%. CONCLUSION: Comparison with recent methods and a t-test-based analysis showed the significance of the proposed framework.
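The t-test-based analysis mentioned in the conclusion can be illustrated with the paired t-statistic on matched accuracy scores (e.g. two methods evaluated on the same folds). The accuracy values below are hypothetical, chosen only to show the arithmetic; they are not the paper's results.

```python
import numpy as np

def paired_t_statistic(a, b) -> float:
    """t statistic for a paired t-test: mean of the per-fold differences
    divided by the standard error of those differences."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    n = len(d)
    return d.mean() / (d.std(ddof=1) / np.sqrt(n))

# Hypothetical per-fold accuracies (%) for the proposed method vs. a baseline
proposed = [95, 97, 96, 98]
baseline = [90, 93, 91, 92]
t = paired_t_statistic(proposed, baseline)
```

A large |t| relative to the t-distribution with n - 1 degrees of freedom indicates the accuracy gap is unlikely to be fold-to-fold noise, which is what a significance claim like the one above rests on.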
